wendell wallach
"That Wasn't My Intent": Reenvisioning Ethics in the Information Age
WENDELL WALLACH: It gives me great pleasure to welcome my longtime colleague Shannon Vallor to this Artificial Intelligence & Equality podcast. Shannon and I have both expressed concerns that ethics and ethical philosophy are inadequate for addressing the issues posed by artificial intelligence (AI) and other emerging technologies, so I have been looking forward to our having a conversation about why that is the case and about ideas for reenvisioning ethics and empowering it for the information age. Before we get to that conversation, let me introduce Shannon to our listeners, provide a very cursory overview of how ethical theories are understood within academic circles, and give Shannon the opportunity to introduce you to the research and insights for which she is best known.

Again, before turning to Shannon, let me make sure that listeners have at least a cursory understanding of the field of ethics. Ethical theories are often said to fall into two big tents. In the first, the determination of what is right, good, or just derives from following the rules or doing your duty. Often these rules are captured in high-level principles, whether the Ten Commandments or the four principles of biomedical ethics. In India they might be Yama and Niyama. Each culture has its own set of rules. Even Asimov's "Three Laws of Robotics" count as rules meant to direct the behavior of robots. All of these theories are said to be deontological, a term going back to the Greek word for duty, and they basically hold that rules and duties define ethics. But of course there are outstanding questions about whose rules to follow, what to do when rules conflict, and how to deal with situations in which people prioritize the rules very differently. At the end of the 18th and beginning of the 19th centuries, Jeremy Bentham, a British philosopher, came up with a totally different approach to ethics, which is sometimes called utilitarianism or consequentialism.
Worried About Robots Taking Over? This Ethics Bot Might Put Your Mind at Ease.
Just how worried should we be about killer robots? To go by the opinions of a highly regarded group of scholars, including Stephen Hawking, Max Tegmark, Frank Wilczek, and Stuart Russell, we should be wary of the prospect of artificial intelligence rebelling against its makers. "One can imagine (AI) outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," Hawking wrote in a 2014 article for The Independent. "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all." The fear that our irresponsible creations might bring about the end of humanity is a common one.
THE TECHNOLOGICAL CITIZEN » "Moral Machines" by Wendell Wallach and Colin Allen
In the 2004 film I, Robot, Will Smith's character, Detective Spooner, harbors a deep grudge against all things technological -- and turns out to be justified after a new generation of robots engages in a full-out, summer blockbuster-style revolt against their human creators. Why was Detective Spooner such a Luddite, even before the robots' vicious revolt? Much of his resentment stems from a car accident he endured in which a robot saved his life instead of a little girl's. The robot's decision haunts Smith's character throughout the movie; he feels the decision lacked emotion, and what one might call 'humanity'. "I was the logical choice," he says. "(The robot) calculated that I had a 45% chance of survival. Sarah only had an 11% chance." He continues, dramatically, "But that was somebody's baby. A human being would've known that."